140 research outputs found
Sobolev inequalities and regularity of the linearized complex Monge-Ampere and Hessian equations
Let a smooth, strictly plurisubharmonic function be given on a bounded
domain. The purpose of this paper
is to study the regularity of solutions to the linearized complex Monge-Amp\`ere
and Hessian equations when the complex Hessian of this function is bounded
from above and below. We first establish some estimates of Green's functions
associated to the linearized equations. Then we prove a class of new Sobolev
inequalities. With these inequalities, we use Moser's iteration to investigate
the a priori estimates of Hessian equations and their linearized equations, as
well as the K\"ahler scalar curvature equation. In particular, we obtain the
Harnack inequality for the linearized complex Monge-Amp\`ere and Hessian
equations under an extra integrability condition on the coefficients. The
approach works in both the real and complex cases.
Comment: 34 pages. Comments welcome
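For orientation, the Harnack inequality obtained for these linearized equations has, schematically, the classical form (the notation below is generic and is not quoted from the paper):

```latex
\sup_{B_r(x_0)} u \;\le\; C \inf_{B_r(x_0)} u ,
```

where $u \ge 0$ solves the linearized equation on a larger ball $B_{2r}(x_0)$ and the constant $C$ depends on the dimension, the bounds on the Hessian, and the integrability of the coefficients.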
DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation
Recent advances in 3D content creation mostly leverage optimization-based 3D
generation via score distillation sampling (SDS). Though promising results have
been exhibited, these methods often suffer from slow per-sample optimization,
limiting their practical usage. In this paper, we propose DreamGaussian, a
novel 3D content generation framework that achieves both efficiency and quality
simultaneously. Our key insight is to design a generative 3D Gaussian Splatting
model with companion mesh extraction and texture refinement in UV space. In
contrast to the occupancy pruning used in Neural Radiance Fields, we
demonstrate that the progressive densification of 3D Gaussians converges
significantly faster for 3D generative tasks. To further enhance the texture
quality and facilitate downstream applications, we introduce an efficient
algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning
stage to refine the details. Extensive experiments demonstrate the superior
efficiency and competitive generation quality of our proposed approach.
Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes
from a single-view image, achieving approximately 10 times acceleration
compared to existing methods.
Comment: project page: https://dreamgaussian.github.io
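The progressive densification mentioned above can be sketched as a simple clone-or-split rule on a set of Gaussians. The thresholds and the minimal (position, scale, grad_norm) representation below are invented for illustration; they are not the paper's actual parameters or data structures.

```python
# A toy sketch of progressive densification for 3D Gaussians, in the spirit
# of the adaptive density control used with Gaussian Splatting. Thresholds
# and the (position, scale, grad_norm) tuples are illustrative only.

def densify(gaussians, grad_thresh=0.1, scale_thresh=0.05):
    """Clone small high-gradient Gaussians; split large ones into two."""
    out = []
    for pos, scale, grad in gaussians:
        if grad <= grad_thresh:
            out.append((pos, scale, grad))          # well-fit: keep as is
        elif scale <= scale_thresh:
            out.extend([(pos, scale, grad)] * 2)    # clone small Gaussian
        else:
            # split a large Gaussian into two smaller ones, offset along x
            for sign in (-1, 1):
                p = (pos[0] + sign * scale / 2, pos[1], pos[2])
                out.append((p, scale / 2, grad))
    return out

pts = [((0.0, 0.0, 0.0), 0.2, 0.5),   # large, high gradient -> split
       ((1.0, 0.0, 0.0), 0.01, 0.5),  # small, high gradient -> clone
       ((2.0, 0.0, 0.0), 0.2, 0.0)]   # low gradient -> keep
print(len(densify(pts)))  # 5
```

Repeating this rule over training iterations grows the point set where reconstruction error concentrates, which is the behavior the abstract credits for faster convergence than occupancy pruning.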
Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering
Mitigating the hallucinations of Large Language Models (LLMs) and enhancing
them is a crucial task. Although some existing methods employ model
self-enhancement techniques, they fall short of effectively addressing unknown
factual hallucinations. Existing Knowledge Graph (KG) enhancement approaches fail
to address generalization across different KG sources and the enhancement
of open-ended question answering simultaneously. To tackle these limitations,
we propose a framework that combines Pseudo-Graph Generation and Atomic
Knowledge Verification. The enhancement of LLMs with KGs in an open-ended
question-answering setting is implemented by leveraging the Pseudo-Graph
Generation. Atomic Knowledge Verification utilizes atomic-level knowledge
querying and verification to achieve generalizability under different KG
sources. Compared to the baseline, this approach yields a minimum improvement
of 11.5 in the ROUGE-L score for open-ended questions. For precise questions,
we observe a minimum accuracy improvement of 7.5. Moreover, we demonstrate
that this framework generalizes across different KG
sources. In summary, our results pave the way for enhancing LLMs by
incorporating Pseudo- and Multisource-KGs, particularly in the context of
open-ended questions.
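The atomic-level verification step described above can be sketched as follows. The toy KG, the example atoms, and the function names are invented for illustration and are not taken from the paper.

```python
# Toy sketch of the Atomic Knowledge Verification idea: decompose an answer
# into atomic (subject, relation, object) facts and check each one against
# a knowledge graph represented as a set of triples.

kg = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "field", "radioactivity"),
}

def verify_atoms(atoms, kg):
    """Split atomic facts into those supported by the KG and those not."""
    verified = [a for a in atoms if a in kg]
    rejected = [a for a in atoms if a not in kg]
    return verified, rejected

answer_atoms = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Paris"),   # unsupported (hallucinated) atom
]
good, bad = verify_atoms(answer_atoms, kg)
print(len(good), len(bad))  # 1 1
```

Because verification happens at the level of individual triples rather than whole answers, the same check works unchanged over any KG source that can be flattened into triples, which is the generalizability the abstract claims.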
Probing phase transition in neutron stars via the crust-core interfacial mode
Gravitational waves emitted from the binary neutron star (BNS) systems can
carry information about the dense matter phase in these compact stars. The
crust-core interfacial mode is an oscillation mode in a neutron star and it
depends mostly on the equation of state of the matter in the crust-core
transition region. This mode can be resonantly excited by the tidal field of an
inspiraling BNS system, thereby affecting the emitted gravitational waves,
and hence could be used to probe the equation of state in the crust-core
transition region. In this work, we investigate in detail how the first-order
phase transition inside the neutron star affects the properties of the
crust-core interfacial mode, using a Newtonian fluid perturbation theory on a
general relativistic background solution of the stellar structure. Two possible
types of phase transitions are considered: (1) the phase transitions happen in
the fluid core but near the crust-core interface, which results in density
discontinuities; and (2) the strong interaction phase transitions in the dense
core (as in the conventional hybrid star case). These phase transitions'
impacts on interfacial mode properties are discussed. In particular, the former
phase transition has a minor effect on the M-R relation and the adiabatic tidal
deformability, but can significantly affect the interfacial mode frequency and
thereby could be probed using gravitational waves. For the BNS systems, we
discuss the possible observational signatures of these phase transitions in the
gravitational waveforms and their detectability. Our work enriches the
exploration of the physical properties of the crust-core interfacial mode and
provides a promising method for probing the phase transition using the
seismology of a compact star.
Comment: 18 pages, 14 figures
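As a schematic reminder (standard tidal-resonance reasoning, not a formula quoted from the paper), an interfacial mode with azimuthal number $m$ and frequency $\omega_i$ is resonantly excited when the orbital frequency sweeps through

```latex
m\,\Omega_{\rm orb}(t_{\rm res}) \simeq \omega_i
\quad\Longrightarrow\quad
f_{\rm gw} = \frac{m\,\Omega_{\rm orb}}{2\pi} \simeq \frac{\omega_i}{2\pi} ,
```

so a phase-transition-induced shift in $\omega_i$ moves the resonance to a different point of the gravitational-wave frequency track, which is what makes the mode a potential probe.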
Accelerated Federated Learning with Decoupled Adaptive Optimization
The federated learning (FL) framework enables edge clients to collaboratively
learn a shared inference model while keeping privacy of training data on
clients. Recently, many heuristic efforts have been made to generalize
centralized adaptive optimization methods, such as SGDM, Adam, AdaGrad, etc.,
to federated settings for improving convergence and accuracy. However, there is
still a paucity of theoretical principles on where and how to design and
utilize adaptive optimization methods in federated settings. This work aims to
develop novel adaptive optimization methods for FL from the perspective of
dynamics of ordinary differential equations (ODEs). First, an analytic
framework is established to build a connection between federated optimization
methods and decompositions of ODEs of corresponding centralized optimizers.
Second, based on this analytic framework, a momentum decoupling adaptive
optimization method, FedDA, is developed to fully utilize the global momentum
on each local iteration and accelerate the training convergence. Last but not
least, full batch gradients are utilized to mimic centralized optimization in
the end of the training process to ensure the convergence and overcome the
possible inconsistency caused by adaptive optimization methods.
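The momentum-decoupling idea above can be sketched on a scalar model: the server-side momentum is frozen during a round and mixed into every local step, rather than applied only at aggregation time. The update rules, hyperparameters, and the two-client quadratic example below are invented for illustration and are not the paper's exact FedDA algorithm.

```python
# Minimal sketch of momentum-decoupled federated optimization (scalar model,
# illustrative hyperparameters): each local SGD step blends the frozen
# global momentum with the fresh local gradient.

LR, BETA, LOCAL_STEPS = 0.1, 0.9, 5

def local_steps(w, momentum, grad_fn):
    """Local SGD that mixes the frozen global momentum into every step."""
    for _ in range(LOCAL_STEPS):
        w -= LR * (BETA * momentum + (1 - BETA) * grad_fn(w))
    return w

def server_round(w, momentum, client_grads):
    updates = [local_steps(w, momentum, g) for g in client_grads]
    new_w = sum(updates) / len(updates)              # FedAvg aggregation
    pseudo_grad = (w - new_w) / (LR * LOCAL_STEPS)   # server pseudo-gradient
    new_momentum = BETA * momentum + (1 - BETA) * pseudo_grad
    return new_w, new_momentum

# two clients with quadratic losses (w-1)^2 and (w-3)^2; the optimum of
# the averaged loss is w = 2
clients = [lambda w: 2 * (w - 1), lambda w: 2 * (w - 3)]
w, m = 10.0, 0.0
for _ in range(100):
    w, m = server_round(w, m, clients)
```

Because every local step sees the same global momentum, the per-client trajectories stay closer to the centralized momentum dynamics than plain FedAvg would, which is the acceleration mechanism the abstract describes.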
Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization
Federated learning (FL) is a promising paradigm to enable collaborative model
training with decentralized data. However, the training process of Large
Language Models (LLMs) generally incurs the update of significant parameters,
which limits the applicability of FL techniques to LLMs in real
scenarios. Prompt tuning can significantly reduce the number of parameters to
update, but it either incurs performance degradation or low training
efficiency. The straightforward utilization of prompt tuning in FL often
raises non-trivial communication costs and dramatically degrades performance.
In addition, the decentralized data is generally non-Independent and
Identically Distributed (non-IID), which brings client drift problems and thus
poor performance. This paper proposes a Parameter-efficient prompt Tuning
approach with Adaptive Optimization, i.e., FedPepTAO, to enable efficient and
effective FL of LLMs. First, an efficient partial prompt tuning approach is
proposed to improve performance and efficiency simultaneously. Second, a novel
adaptive optimization method is developed to address the client drift problems
on both the device and server sides to enhance performance further. Extensive
experiments based on 10 datasets demonstrate the superb performance (up to
60.8\% in terms of accuracy) and efficiency (up to 97.59\% in terms of training
time) of FedPepTAO compared with 9 baseline approaches. Our code is available
at https://github.com/llm-eff/FedPepTAO.
Comment: 18 pages, accepted by EMNLP 202
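The partial prompt tuning step can be sketched as a layer-selection problem: score each layer's prompt parameters, mark only a budgeted top subset trainable, and exchange only those with the server. The scoring dictionary, budget, and function names below are invented for illustration and are not the paper's actual selection rule.

```python
# Hypothetical sketch of partial prompt tuning: only the prompt vectors of
# the highest-scoring layers are trained and communicated, shrinking both
# update size and communication cost.

def select_trainable_layers(layer_scores, budget):
    """Pick the `budget` layers with the largest importance scores."""
    ranked = sorted(layer_scores, key=layer_scores.get, reverse=True)
    return set(ranked[:budget])

def client_update_payload(prompts, trainable):
    """Only trainable layers' prompt vectors are sent to the server."""
    return {layer: vec for layer, vec in prompts.items() if layer in trainable}

scores = {"layer0": 0.1, "layer1": 0.9, "layer2": 0.4, "layer3": 0.7}
prompts = {k: [0.0] * 4 for k in scores}          # toy prompt vectors
trainable = select_trainable_layers(scores, budget=2)
payload = client_update_payload(prompts, trainable)
print(sorted(payload))  # ['layer1', 'layer3']
```

With a fixed budget, the communicated payload stays a small, constant fraction of the prompt parameters regardless of model depth, which is how an approach of this kind can cut training time without freezing everything.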